Event Detection (ED) is the task of identifying and classifying the trigger words of event mentions in text. Despite considerable research effort on English text in recent years, ED in other languages remains significantly less explored. For non-English languages, important research questions include how well existing ED models perform on different languages, how challenging ED is in those languages, and how well ED knowledge and annotation can be transferred across languages. Answering these questions requires multilingual ED datasets that provide consistent event annotation across languages. Some multilingual ED datasets exist; however, they tend to cover only a handful of languages, mainly popular ones, leaving many languages unrepresented. In addition, the current datasets are often small and not publicly accessible. To overcome these shortcomings, we introduce a new large-scale multilingual dataset for ED (called MINION) that consistently annotates events in 8 different languages, 5 of which are not supported by existing multilingual datasets. We also perform extensive experiments and analysis to demonstrate the challenges and transferability of ED across languages in MINION, which together call for more research effort in this area.
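As a toy illustration of what an ED annotation looks like, the span-based format such corpora typically use pairs a trigger word with an event-type label and character offsets. The sentence, label, and offsets below are invented for illustration, not drawn from MINION:

```python
# Hypothetical ED annotation: the trigger token, its event type,
# and the character span of the trigger in the sentence.
sentence = "Protesters attacked the embassy on Monday."
annotation = {
    "trigger": "attacked",            # the word evoking the event
    "event_type": "Conflict:Attack",  # its event-type label
    "start": 11,                      # character offset where the trigger begins
    "end": 19,                        # character offset where it ends
}

# The offsets must recover exactly the trigger string.
assert sentence[annotation["start"]:annotation["end"]] == annotation["trigger"]
```

Consistent annotation across languages means the same trigger/type scheme is applied to every language in the dataset.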
Event Extraction (EE) is one of the fundamental tasks in Information Extraction (IE); it aims to recognize event mentions and their arguments (i.e., participants) in text. Due to its importance, extensive methods and resources have been developed for EE. However, one limitation of current research is the under-exploration of non-English languages, for which the lack of high-quality multilingual EE datasets for model training and evaluation has been the main hindrance. To address this limitation, we propose a novel Multilingual Event Extraction dataset (MEE) that provides annotations for more than 50K event mentions in 8 typologically different languages. MEE comprehensively annotates entity mentions, event triggers, and event arguments. We conduct extensive experiments on the proposed dataset to reveal the challenges and opportunities of multilingual EE.
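To make the trigger/argument structure concrete, a full EE annotation couples an event trigger with a set of entity-mention arguments, each carrying a role. The sentence, event type, and role names below are hypothetical, not taken from MEE:

```python
# Hypothetical EE annotation: one event mention with its trigger
# and the entity mentions that participate in it, labeled by role.
annotation = {
    "sentence": "Mary sold the car to John.",
    "trigger": {"text": "sold", "event_type": "Transaction:Sell"},
    "arguments": [
        {"text": "Mary", "role": "Seller"},
        {"text": "the car", "role": "Artifact"},
        {"text": "John", "role": "Buyer"},
    ],
}

# Every argument text must appear in the sentence.
assert all(a["text"] in annotation["sentence"] for a in annotation["arguments"])
```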
In this paper, we introduce a novel notion of user-entity differential privacy (UeDP) that simultaneously provides formal privacy protection to both sensitive entities in textual data and data owners when learning natural language models (NLMs). To preserve UeDP, we develop a novel algorithm, called UeDP-Alg, that optimizes the trade-off between privacy loss and model utility with a tight sensitivity bound derived by seamlessly combining the user and sensitive-entity sampling processes. Extensive theoretical analysis and evaluation show that our UeDP-Alg outperforms baseline approaches in model utility under the same privacy budget consumption on several NLM tasks, using benchmark datasets.
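UeDP-Alg's actual contribution (the combined user/entity sampling scheme and its tight sensitivity bound) is defined in the paper and is not reproduced here. The DP-SGD-style aggregation step that such methods build on, however, can be sketched generically; the function and parameter names below are assumptions, not the paper's API:

```python
import numpy as np

def dp_noisy_average(per_unit_grads, clip_norm, noise_multiplier, rng):
    """Generic DP-SGD-style aggregation: clip each sampled unit's gradient
    to clip_norm (bounding per-unit sensitivity), sum, add Gaussian noise
    scaled to that sensitivity, then average. A sketch of the standard
    recipe, not the UeDP-Alg algorithm itself."""
    clipped = []
    for g in per_unit_grads:
        norm = np.linalg.norm(g)
        scale = min(1.0, clip_norm / (norm + 1e-12))  # clip only if too large
        clipped.append(g * scale)
    total = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(per_unit_grads)
```

In UeDP the "units" would cover both users and sensitive entities, which is precisely where the paper's tighter sensitivity analysis comes in.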
Livestreamed videos are one way for creators to share their creative work with viewers. In these videos, streamers show how a final goal is achieved by using various tools in one or more programs for a creative project, discussing along the way the steps needed to reach that goal. These videos can therefore offer substantial educational content for learning how to use the tools the streamers employ. One drawback, however, is that streamers may not provide enough detail for every step, so it can be difficult for learners to follow all of them. To mitigate this problem, one solution is to link the streamed videos to relevant tutorials available for the tools used in them. More specifically, a system can analyze the content of a livestreamed video and recommend the most relevant tutorials. Since existing document recommendation models cannot handle this scenario, in this work we present a novel dataset and model for tutorial recommendation for livestreamed videos. Extensive analyses of the proposed dataset and model reveal the challenging nature of this task.
Keyphrase extraction is one of the important tasks for document understanding in NLP. While most prior work has focused on formal settings such as books, news articles, or web blogs, informal texts such as video transcripts are less explored. To address this limitation, in this work we present a novel corpus and method for keyphrase extraction from the transcripts of videos streamed on the Behance platform. More specifically, we propose a novel data-augmentation method that enriches the model with background knowledge for the keyphrase extraction task drawn from other domains. Extensive experiments on the proposed dataset demonstrate the effectiveness of the introduced method.
We propose a margin-based loss for vision-language model pretraining that encourages gradient-based explanations consistent with region-level annotations. We refer to this objective as Attention Mask Consistency (AMC) and demonstrate that it produces superior visual grounding performance compared to models that rely on region-level annotations to explicitly train an object detector such as Faster R-CNN. AMC works by encouraging gradient-based explanation masks whose attention scores concentrate mostly within the annotated regions of images that contain such annotations. In particular, a model trained with AMC on top of a standard vision-language modeling objective obtains a state-of-the-art accuracy of 86.59% on the Flickr30k visual grounding benchmark, an absolute improvement of 5.48% over the best prior model. Our approach also performs strongly on established referring expression comprehension benchmarks and, by design, produces gradient-based explanations that align better with human annotations.
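The exact AMC objective is defined in the paper; a simplified hinge-style sketch of the underlying idea, namely pushing the peak attention score inside an annotated region above the peak score outside it by some margin, might look like this (function name, mask convention, and margin value are illustrative assumptions):

```python
import numpy as np

def amc_margin_loss(saliency, region_mask, margin=0.1):
    """Hinge-style consistency loss: penalize the model unless the maximum
    gradient-based attention score inside the annotated region exceeds the
    maximum score outside it by at least `margin`. A simplified sketch of
    the idea behind AMC, not the paper's exact formulation."""
    inside = saliency[region_mask.astype(bool)]
    outside = saliency[~region_mask.astype(bool)]
    return max(0.0, margin - (inside.max() - outside.max()))
```

When attention already peaks inside the annotated box, the loss is zero; when it peaks outside, the loss grows with the gap, steering gradients back toward the annotation.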
Aspect-based sentiment analysis (ABSA) is a natural language processing problem that requires analyzing user-generated reviews to determine: a) the target entity being reviewed, b) the high-level aspect to which it belongs, and c) the sentiment expressed toward the target and aspect. The numerous yet scattered corpora for ABSA make it difficult for researchers to quickly identify the corpora best suited to a particular ABSA subtask. This study aims to present a database of corpora that can be used to train and evaluate autonomous ABSA systems. In addition, we provide an overview of the main corpora for ABSA and its subtasks and highlight several corpus features that researchers should consider when selecting a corpus. We conclude that further large-scale ABSA corpora are needed. Moreover, because each corpus is constructed differently, it is time-consuming for researchers to experiment with a novel ABSA algorithm on many corpora, and they typically adopt only one or a few; the field would benefit from a data-standardization protocol for ABSA corpora. Finally, we discuss the advantages and shortcomings of current collection methods and make recommendations for future ABSA dataset collection.
In inverse reinforcement learning (IRL), a learning agent infers a reward function encoding the underlying task using demonstrations from experts. However, many existing IRL techniques make the often unrealistic assumption that the agent has access to full information about the environment. We remove this assumption by developing an algorithm for IRL in partially observable Markov decision processes (POMDPs). We address two limitations of existing IRL techniques. First, they require an excessive amount of data due to the information asymmetry between the expert and the learner. Second, most of these IRL techniques require solving the computationally intractable forward problem -- computing an optimal policy given a reward function -- in POMDPs. The developed algorithm reduces the information asymmetry while increasing the data efficiency by incorporating task specifications expressed in temporal logic into IRL. Such specifications may be interpreted as side information available to the learner a priori in addition to the demonstrations. Further, the algorithm avoids a common source of algorithmic complexity by building on causal entropy, rather than entropy, as the measure of the likelihood of the demonstrations. Nevertheless, the resulting problem is nonconvex due to the forward problem. We handle this intrinsic nonconvexity in a scalable manner through a sequential linear programming scheme that is guaranteed to converge to a locally optimal policy. In a series of examples, including experiments in a high-fidelity Unity simulator, we demonstrate that even with a limited amount of data and POMDPs with tens of thousands of states, our algorithm learns reward functions and policies that satisfy the task while inducing behavior similar to the expert's by leveraging the provided side information.
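The paper's algorithm operates on POMDPs with temporal-logic side information and a sequential linear programming solver, none of which is reproduced here. As a minimal, textbook-style sketch of the feature-expectation-matching gradient that linear-reward, (causal-)entropy IRL methods share, under the assumption of a linear reward r(s, a) = w · φ(s, a) (all names, and the toy `rollout_fn` hook in particular, are assumptions for illustration):

```python
import numpy as np

def fit_linear_reward(expert_feats, rollout_fn, dim, steps=100, lr=0.1):
    """Gradient ascent on the IRL log-likelihood for a linear reward:
    the gradient w.r.t. the weights w is the expert feature expectation
    minus the feature expectation induced by the current policy.
    `rollout_fn(w)` is a hypothetical hook returning feature expectations
    of the policy obtained by solving the forward problem for weights w
    (the hard part in POMDPs, and what the paper's sequential LP handles)."""
    w = np.zeros(dim)
    for _ in range(steps):
        policy_feats = rollout_fn(w)
        w += lr * (expert_feats.mean(axis=0) - policy_feats.mean(axis=0))
    return w
```

At convergence, the learned policy matches the expert's feature expectations, which is exactly the stationarity condition of maximum-(causal-)entropy IRL.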
Even when effective, the use of a model must be accompanied by an understanding of the data transformed at each level (upstream and downstream). There is thus a growing need to characterize the link between individual data and the choices an algorithm can make based on its analysis (for example, recommending a product or a promotional offer, or setting an insurance rate representing a risk). Model users must ensure that models do not discriminate and must be able to explain their results. This paper introduces the importance of model explanation and addresses the notion of model transparency. In an insurance context, it specifically illustrates how certain tools can be used to enforce the control of actuarial models, which today can take advantage of machine learning. On a simple example of claim frequency estimation in automobile insurance, we show the interest of some explainability methods for adapting explanations to the target audience.
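One widely used, model-agnostic explainability method of the kind discussed here is permutation feature importance: shuffle one feature at a time and measure how much the model's error grows. A minimal sketch, generic rather than tied to any specific actuarial tool (function and parameter names are assumptions):

```python
import numpy as np

def permutation_importance(predict, X, y, metric, n_repeats=5, seed=0):
    """For each feature column, shuffle it n_repeats times and report the
    mean increase of `metric` over the unshuffled baseline. A feature the
    model ignores scores 0; a feature the model relies on scores > 0."""
    rng = np.random.default_rng(seed)
    base = metric(y, predict(X))
    importances = []
    for j in range(X.shape[1]):
        deltas = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # destroy the feature/target relationship
            deltas.append(metric(y, predict(Xp)) - base)
        importances.append(np.mean(deltas))
    return np.array(importances)
```

For a claim-frequency model, this kind of ranking gives an audience-appropriate, global view of which rating variables drive the premium.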
We revisit NPBG, the popular approach to novel view synthesis that introduced the ubiquitous point-based neural rendering paradigm. We are particularly interested in data-efficient learning with fast view synthesis. We achieve this through a view-dependent mesh-point descriptor rasterization, in addition to a foreground/background scene-rendering split and an improved loss. By training on only one scene, we outperform NPBG, which was trained on ScanNet and then fine-tuned on the scene. We also perform competitively with respect to the state-of-the-art method SVS, which was trained on full datasets (DTU and Tanks and Temples) and then fine-tuned on the scene, even though they employ a deeper neural renderer.